103 research outputs found
Bamboo Wear and Its Application in Friction Material
The sliding wear behaviour of bamboo (Phyllostachys pubescens) was investigated under dry friction. The wear volume of bamboo was a function of the sliding velocity, the normal load, and the orientation of the bamboo fibres relative to the friction surface. The tribological properties of Bamboo Fiber Reinforced Friction Materials (BFRFMs) were then tested on a constant-speed friction tester. The results showed that the wear volume increased with increasing sliding velocity and normal load. The normal-oriented specimens (N-type) showed better wear resistance than the parallel-oriented ones (PS- and PI-type), and the outer surface layer (PS-type) showed better resistance than the inner layer (PI-type). During the temperature-increasing procedure, the friction coefficients of BFRFMs (reinforced with 3 wt.%, 6 wt.%, or 9 wt.% bamboo fibres) were higher than that of a friction material with identical ingredients and processing conditions but no bamboo fibres. The friction coefficients of the specimens containing 3 wt.% bamboo fibres were higher than those of the other specimens. The wear rate of BFRFMs increased with increasing test temperature, and the wear rates of the specimens containing 3 wt.% bamboo fibres were lower than those of the other specimens.
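The reported trend, wear volume growing with normal load and sliding velocity, matches the classical Archard wear relation, V = k·W·s/H. The abstract reports empirical results, not this model; the sketch below is only an illustration of why those two variables dominate, with all parameter values hypothetical.

```python
def archard_wear_volume(load_N, sliding_distance_m, hardness_Pa, wear_coeff):
    """Archard wear relation V = k * W * s / H.

    Illustrative only: the paper reports measured wear volumes for bamboo;
    it does not state that bamboo follows Archard's law. Sliding distance
    s = velocity * time, which is why wear grows with sliding velocity at
    fixed test duration.
    """
    return wear_coeff * load_N * sliding_distance_m / hardness_Pa


# Hypothetical numbers: 10 N load, 100 m of sliding, 1 GPa hardness.
v = archard_wear_volume(10.0, 100.0, 1e9, 1e-3)
```

Doubling either the load or the sliding distance doubles the predicted wear volume, consistent with the monotone trends the abstract describes.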
GFF: Gated Fully Fusion for Semantic Segmentation
Semantic segmentation generates comprehensive understanding of scenes through
densely predicting the category for each pixel. High-level features from Deep
Convolutional Neural Networks already demonstrate their effectiveness in
semantic segmentation tasks, however the coarse resolution of high-level
features often leads to inferior results for small/thin objects where detailed
information is important. It is natural to import low-level features to
compensate for the detailed information lost in high-level
features. Unfortunately, simply combining multi-level features suffers from the
semantic gap among them. In this paper, we propose a new architecture, named
Gated Fully Fusion (GFF), to selectively fuse features from multiple levels
using gates in a fully connected way. Specifically, features at each level are
enhanced by higher-level features with stronger semantics and lower-level
features with more details, and gates are used to control the propagation of
useful information which significantly reduces the noises during fusion. We
achieve state-of-the-art results on four challenging scene parsing datasets
including Cityscapes, Pascal Context, COCO-Stuff and ADE20K.
Comment: accepted at AAAI-2020 (oral).
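The core idea, per-level gates that decide how much each feature level contributes during fully connected fusion, can be sketched with scalars standing in for feature maps. The gating equation below is a plausible reading of the abstract, not the paper's exact formulation.

```python
def gated_fully_fusion(features, gates):
    """Sketch of gated fully connected multi-level fusion.

    features: list of per-level features (scalars here; feature maps in
              practice). gates: per-level gate values in [0, 1], which in the
              real model would be predicted per pixel by a small subnetwork.
    Hypothetical combination rule: each level keeps its own feature,
    amplified by its gate, and receives gated contributions from all other
    levels, attenuated when its own gate is high.
    """
    fused = []
    for l, (x_l, g_l) in enumerate(zip(features, gates)):
        # Gated sum over all *other* levels: each level i only propagates
        # the fraction of its feature that its own gate g_i lets through.
        others = sum(g_i * x_i
                     for i, (x_i, g_i) in enumerate(zip(features, gates))
                     if i != l)
        fused.append((1 + g_l) * x_l + (1 - g_l) * others)
    return fused
```

With all gates at zero, nothing propagates and each level keeps its original feature; with all gates at one, each level is simply amplified and cross-level noise is suppressed, which is the "gates control propagation of useful information" behaviour the abstract describes.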
Improving BERT with Self-Supervised Attention
One of the most popular paradigms for applying large pre-trained NLP models
such as BERT is to fine-tune them on a smaller dataset. However, one challenge
remains as the fine-tuned model often overfits on smaller datasets. A symptom
of this phenomenon is that irrelevant or misleading words in the sentence,
which are easy to understand for human beings, can substantially degrade the
performance of these fine-tuned BERT models. In this paper, we propose a novel
technique, called Self-Supervised Attention (SSA), to help address this
generalization challenge. Specifically, SSA automatically generates weak,
token-level attention labels iteratively by probing the fine-tuned model from
the previous iteration. We investigate two different ways of integrating SSA
into BERT and propose a hybrid approach to combine their benefits. Empirically,
through a variety of public datasets, we illustrate significant performance
improvement using our SSA-enhanced BERT model.
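The labeling step, probing the previous iteration's fine-tuned model to derive weak token-level attention labels, can be sketched as a simple top-k rule over per-token importance scores. The thresholding scheme and `keep_ratio` parameter here are assumptions for illustration; the paper's probing procedure may differ.

```python
def weak_attention_labels(token_scores, keep_ratio=0.5):
    """Sketch: turn per-token importance scores into binary weak labels.

    token_scores: importance of each token, e.g. how much the previous
    iteration's fine-tuned model's prediction changes when that token is
    perturbed (the probing signal). Tokens in the top `keep_ratio` fraction
    get label 1 ("attend"), the rest 0 ("irrelevant/misleading").
    Hypothetical rule; `keep_ratio` is not a parameter from the paper.
    """
    k = max(1, int(len(token_scores) * keep_ratio))
    ranked = sorted(range(len(token_scores)), key=lambda i: -token_scores[i])
    keep = set(ranked[:k])
    return [1 if i in keep else 0 for i in range(len(token_scores))]
```

Iterating fine-tune → probe → relabel lets the weak labels sharpen over rounds, which is the self-supervised loop the abstract outlines.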
Towards Robust Referring Image Segmentation
Referring Image Segmentation (RIS) aims to connect image and language by
outputting the corresponding object masks given a text description, which is a
fundamental vision-language task. Despite lots of works that have achieved
considerable progress on RIS, in this work we explore an essential question:
"what if the text description is wrong or misleading?" We
term such a sentence a negative sentence. We find that existing
works cannot handle such settings. To this end, we propose a novel formulation
of RIS, named Robust Referring Image Segmentation (R-RIS). It considers the
negative sentence inputs in addition to the regular text inputs. We present
three datasets built by augmenting inputs with negative sentences, and a new
metric that unifies both input types. Furthermore, we design a new
transformer-based model named RefSegformer, where we introduce a token-based
vision and language fusion module. Such a module can be easily extended to our
R-RIS setting by adding extra blank tokens. Our proposed RefSegformer achieves
new state-of-the-art results on three regular RIS datasets and three R-RIS
datasets, serving as a solid baseline for further research. The
project page is at \url{https://lxtgh.github.io/project/robust_ref_seg/}.
Comment: technical report.
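One plausible way the extra blank tokens support the R-RIS setting is as a "no target" route: if the model's confidence concentrates on the blank tokens rather than the object tokens, the sentence is judged negative and an empty mask is returned. This decision rule is an assumption for illustration, not the paper's stated mechanism.

```python
def r_ris_decision(object_score, blank_score):
    """Hypothetical R-RIS decision sketch.

    object_score: decoder confidence accumulated on object/vision tokens.
    blank_score:  confidence accumulated on the extra blank tokens.
    A negative (mismatched) sentence should push probability mass onto the
    blanks, so we return "no_target" and an empty mask in that case.
    """
    return "segment" if object_score > blank_score else "no_target"
```

This kind of explicit rejection route is what lets a single model handle both regular and negative sentences, which the unified R-RIS metric then scores together.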
- …